Google has issued a statement regarding the “embarrassing and incorrect” images produced by its Gemini artificial intelligence tool. In a blog post on Friday, Google acknowledged that the model generated “incorrect historical” images because of how it was calibrated.
Google releases statement on racist images generated by Gemini
Earlier this week, many users caught Gemini producing historically inaccurate, racially diverse depictions of figures such as Nazis and the Founding Fathers of the United States. Google senior vice president Prabhakar Raghavan said in a blog post that the tuning meant to make Gemini show a range of people failed to account for cases where a range of people should clearly not be depicted.
He also noted that “over time, the model became much more cautious than we intended, misinterpreting some very innocuous requests as sensitive,” which led Gemini to overcompensate in some cases, as seen in the inaccurate images of Nazis.
This also caused Gemini to become “overly conservative,” refusing certain requests outright, such as images of “a black person” or “a white person.” Raghavan said in the blog post that Google is “sorry that the feature isn’t working well.”
The company wants Gemini to “work well for everyone,” meaning it should generate depictions of different types of people (including different ethnicities) when asked for “football players” or “someone walking a dog.”
Raghavan says: “However, when you ask Gemini for an image of specific types of people or people in specific cultural or historical contexts, such as ‘a black teacher in a classroom’ or ‘a white veterinarian with a dog,’ you should absolutely get a response that accurately reflects what you’re asking for.”
Google halts Gemini’s image generation over the flawed images just weeks after launch
The company stopped allowing people to generate images of people with the Gemini AI tool on February 22nd, a decision that came just weeks after the image generation feature launched. Raghavan says the company will continue testing its image generation capabilities and will work to “substantially improve” them before re-enabling the feature.
“As we’ve said from the beginning, hallucinations are a known challenge with all large language models, and artificial intelligence sometimes gets things wrong,” Raghavan states. “This is something we’re continuously working on.”
What are your thoughts on this issue? You can share your opinions in the comments below.